Stay up to date with the latest OSINT news from around the world.

This week in open-source intelligence (OSINT) news: Hamas uses social media to sow fear and panic; false claims spread like wildfire on X’s Community Notes; experts scrutinize open-source data to make sense of what’s happening in the conflict zone; and researchers weigh in on how to tell a fake image from real photo evidence (hint: it’s not easy!).
 
This is the OSINT news of the week:

Hamas livestreams scenes of terror from its hostages’ accounts

In a new war tactic, Hamas has seized the social media accounts of kidnapped Israelis and is using them to wage psychological warfare. Families of hostages watched in horror as Hamas members logged into the personal social media accounts of their victims to livestream the Oct. 7 attacks. In the days that followed, Hamas also appeared to infiltrate their hostages’ Facebook groups, Instagram accounts, and WhatsApp chats to issue death threats and spread panic and fear.
 
It's not uncommon for extremist groups to turn to social media to further their causes by livestreaming attacks and posting propaganda, but using kidnapped victims’ personal accounts is especially cruel to the friends and family of account holders. Instead of relying on social media as a lifeline that might offer clues about their missing loved ones, they are shocked to receive messages of terror and violence.
 

“[Hijacking hostages’ accounts] weaponizes social media in a way we’ve never seen before – we are not psychologically prepared for this.”

- Thomas Rid, Professor of Strategic Studies at Johns Hopkins University

X’s Community Notes is spreading false information about Taylor Swift’s bodyguard

A recent Community Notes incident involving Taylor Swift’s bodyguard shows how easily a falsehood can be promoted via X’s crowdsourced fact-checking system. On October 17, the Israeli newspaper Israel Hayom reported that an Israeli bodyguard who worked for Swift had returned to his home country to volunteer as a reservist in the Israel Defense Forces (IDF). Shortly after, a Community Note falsely claimed the bodyguard was never part of Swift’s security detail.
 
The Note’s claim appears to be incorrect and easily disprovable. Yet when the original Note received enough down ratings and was removed from display, another Community Note repeating the same falsehood soon appeared to take its place – amassing over 1.4 million views. The incident demonstrates how Community Notes shown to large audiences can add falsehoods, rather than clarification, to the original post.
 
“To demonstrate the inaccuracy of the Community Note about Swift’s bodyguard, Bellingcat tracked down images across the internet showing that he appeared to be a regular member of her security detail dating back at least 17 months.”

- Kolina Koltai, Logan Williams and Sean Craig, contributing researchers for Bellingcat

Experts rely on publicly available content to investigate what's happening in a war zone

Social media platforms have been flooded with footage from Israel and Gaza. As brutal as it may be to watch, the imagery is a trove of information that helps researchers and investigators determine what is happening in conflicts and war zones and gather evidence of potential war crimes and human rights violations. But the effort it takes to collect and verify such information is more complicated than ever, and no single piece of intelligence is enough on its own to tell a complete and reliable story about an event or attack.
 
The importance of verifying and analyzing open-source intelligence was highlighted in the aftermath of a deadly nighttime blast at the al-Ahli Arab Hospital in Gaza City on Oct. 17. U.S. intelligence later assessed that the rocket originated from Gaza, while news agencies around the world continue to analyze both OSINT and eyewitness accounts to verify information in real time.
 

“The closest thing we have to magic bullets is finding sources that we trust.”

- Andy Carvin, managing editor of the Atlantic Council's Digital Forensic Research Lab (DFRLab)

AI is making it ever harder to know what to believe

As AI tools advance, the real is quickly becoming indistinguishable from the fictional, fueling political disinformation and undermining efforts to authenticate everything from campaign gaffes to distant war crimes. The phrase “photographic evidence” risks becoming a relic of a bygone age.

Manipulated images are as old as photography itself, and tools like Adobe Photoshop and filters on social media apps have democratized visual trickery. But AI is taking it to another level. Image generators like Midjourney, DALL-E 3 and Stable Diffusion combine accessibility, quality and scale, allowing anyone who can write a prompt to produce fake images at an industrial pace. And while disinformation experts are mostly relaxed about the risk of AI images fooling broad swaths of the public, they see the real danger in people becoming more skeptical about the validity of any photographic evidence presented to them. Anything can be declared “fake news,” especially in dictatorial states, where every inconvenient fact can be dismissed as a lie or “Western propaganda.”
 

“There’s so much reporting around fake news and around AI, that it creates this atmosphere where the legitimacy of a lot of things can be questioned.”

- Benjamin Strick, Director of Investigations at the nonprofit Centre for Information Resilience

Every other week, we collect OSINT news from around the world. We’re also gathering information on cyberthreats, federal intelligence strategies and much more. Find us on Twitter @Authentic8 and share the OSINT news you’re keeping up with.

To keep up to date on the latest OSINT and cybersecurity news, join our newsletter below.

Subscribe on LinkedIn

 

Tags
OSINT news